In [1]:
import pandas as pd
import numpy as np
# Set some Pandas options
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 20)
pd.set_option('display.max_rows', 25)
There are a handful of third-party Python packages that are suitable for creating scientific plots and visualizations. Here, we will focus exclusively on matplotlib and the high-level plotting available within pandas. Matplotlib is currently the most robust and feature-rich package available.
We require plots, charts and other statistical graphics for the written communication of quantitative ideas.
They allow us to more easily convey relationships and reveal deviations from patterns.
Gelman and Unwin 2011:
A well-designed graph can display more information than a table of the same size, and more information than numbers embedded in text. Graphical displays allow and encourage direct visual comparisons.
The easiest way to interact with matplotlib is via pylab in IPython. By starting IPython (or the IPython notebook) in "pylab mode", both matplotlib and numpy are pre-loaded into the IPython session:
ipython notebook --pylab
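If you want the resulting plots embedded directly in the notebook rather than shown in a separate window, you can request the inline backend at startup:

ipython notebook --pylab inline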
You can specify a custom graphical backend (e.g. qt, gtk, osx), but IPython generally does a good job of auto-selecting. Now matplotlib is ready to go, and you can access the matplotlib API via plt. If you do not start IPython in pylab mode, you can do this manually with the following convention:
import matplotlib.pyplot as plt
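Outside of IPython, a specific backend can also be selected explicitly before pyplot is imported; a minimal sketch (the backend name here is just an example, and must be one available on your system):

import matplotlib
matplotlib.use('Agg')  # select a backend; must be called before importing pyplot
import matplotlib.pyplot as plt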
In [2]:
plt.plot(np.random.normal(size=100), np.random.normal(size=100), 'ro')
Out[2]:
The above plot simply shows two sets of random numbers taken from a normal distribution plotted against one another. The 'ro' argument is shorthand telling matplotlib to draw the points as red circles.
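The format string combines a color, a marker and (optionally) a line style; for example, a sketch drawing the same kind of data as green triangles instead:

plt.plot(np.random.normal(size=50), np.random.normal(size=50), 'g^')  # 'g' = green, '^' = triangle marker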
This plot was expedient. We can exercise a little more control by breaking the plotting into a workflow:
In [3]:
import matplotlib as mpl  # needed for rc_context when not in pylab mode

with mpl.rc_context(rc={'font.family': 'serif', 'font.weight': 'bold', 'font.size': 8}):
    fig = plt.figure(figsize=(6,3))
    ax1 = fig.add_subplot(121)
    ax1.set_xlabel('some random numbers')
    ax1.set_ylabel('more random numbers')
    ax1.set_title("Random scatterplot")
    plt.plot(np.random.normal(size=100), np.random.normal(size=100), 'r.')
    ax2 = fig.add_subplot(122)
    plt.hist(np.random.normal(size=100), bins=15)
    ax2.set_xlabel('sample')
    ax2.set_ylabel('frequency')
    ax2.set_title("Normal distribution")
    plt.tight_layout()
    plt.savefig("normalvars.png", dpi=150)
matplotlib is a relatively low-level plotting package compared to others. It makes very few assumptions about what constitutes good layout (by design), but has a lot of flexibility to allow the user to completely customize the look of the output.
If you want to make your plots look pretty like mine, steal the matplotlibrc file from Huy Nguyen.
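Short of a full matplotlibrc file, individual defaults can be overridden for the current session through the rcParams dictionary; the values below are just illustrative:

import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (8, 5)   # default figure size, in inches
mpl.rcParams['font.family'] = 'serif'     # default font family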
On the other hand, Pandas includes methods for DataFrame and Series objects that are relatively high-level, and that make reasonable assumptions about how the plot should look.
In [4]:
normals = pd.Series(np.random.normal(size=10))
normals.plot()
Out[4]:
Notice that by default a line plot is drawn, and a light grid is included. All of this can be changed, however:
In [5]:
normals.cumsum().plot(grid=False)
Out[5]:
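Other aspects of the plot can be adjusted with keyword arguments in the same call; for instance, a sketch with an illustrative line style and title:

normals.cumsum().plot(grid=False, style='k--', title='Cumulative sum')  # dashed black line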
Similarly, for a DataFrame:
In [6]:
variables = pd.DataFrame({'normal': np.random.normal(size=100),
                          'gamma': np.random.gamma(1, size=100),
                          'poisson': np.random.poisson(size=100)})
variables.cumsum(0).plot()
Out[6]:
As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument to plot:
In [7]:
variables.cumsum(0).plot(subplots=True)
Out[7]:
Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space:
In [8]:
variables.cumsum(0).plot(secondary_y='normal')
Out[8]:
If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes:
In [9]:
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
for i, var in enumerate(['normal', 'gamma', 'poisson']):
    variables[var].cumsum(0).plot(ax=axes[i], title=var)
axes[0].set_ylabel('cumulative sum')
Out[9]:
In [10]:
titanic = pd.read_excel("data/titanic.xls", "titanic")
titanic.head()
Out[10]:
In [11]:
titanic.groupby('pclass').survived.sum().plot(kind='bar')
Out[11]:
In [12]:
titanic.groupby(['sex','pclass']).survived.sum().plot(kind='barh')
Out[12]:
In [13]:
death_counts = pd.crosstab([titanic.pclass, titanic.sex], titanic.survived.astype(bool))
death_counts.plot(kind='bar', stacked=True, color=['black','gold'], grid=False)
Out[13]:
Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group.
In [14]:
death_counts.div(death_counts.sum(1).astype(float), axis=0).plot(kind='barh', stacked=True, color=['black','gold'])
Out[14]:
Frequently it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays relative frequencies of data values; hence, the y-axis is always some measure of frequency. This can either be raw counts of values or scaled proportions.
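For instance, a minimal sketch of plotting proportions rather than raw counts, by passing a weights array through to matplotlib's hist (using the variables frame from earlier):

vals = variables['normal']
# Each observation contributes 1/n, so the bar heights sum to 1
vals.hist(weights=np.ones(len(vals)) / len(vals), grid=False)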
For example, we might want to see how the fares were distributed aboard the Titanic:
In [19]:
titanic.fare.hist(grid=False)
Out[19]:
The hist method puts the continuous fare values into bins, trying to make a sensible decision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10):
In [20]:
titanic.fare.hist(bins=30)
Out[20]:
There are algorithms for determining an "optimal" number of bins, each of which scales in some way with the number of observations in the data series.
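Two of the simplest rules, both implemented in the next cell, are Sturges' formula and the square-root rule (note that the code below truncates rather than rounding up):

$$k = \lceil \log_2 n + 1 \rceil \qquad k = \lceil \sqrt{n} \rceil$$

where $n$ is the number of observations and $k$ is the suggested number of bins.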
In [21]:
sturges = lambda n: int(np.log2(n) + 1)   # Sturges' formula
square_root = lambda n: int(np.sqrt(n))   # square-root rule
from scipy.stats import kurtosis
doanes = lambda data: int(1 + np.log(len(data)) + np.log(1 + kurtosis(data) * (len(data) / 6.) ** 0.5))
n = len(titanic)
sturges(n), square_root(n), doanes(titanic.fare.dropna())
Out[21]:
In [22]:
titanic.fare.hist(bins=doanes(titanic.fare.dropna()))
Out[22]:
A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate.
In [23]:
titanic.fare.dropna().plot(kind='kde', xlim=(0,600))
Out[23]:
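Under the hood, pandas relies on scipy's gaussian_kde for this; a minimal sketch of the equivalent computation, with the evaluation grid chosen to match the plot above:

from scipy.stats import gaussian_kde

fares = titanic.fare.dropna()
kde = gaussian_kde(fares)            # Gaussian kernels; bandwidth chosen by Scott's rule
support = np.linspace(0, 600, 500)   # grid spanning the same x-range as above
plt.plot(support, kde(support))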
Often, histograms and density plots are shown together:
In [24]:
titanic.fare.hist(bins=doanes(titanic.fare.dropna()), normed=True, color='lightseagreen')
titanic.fare.dropna().plot(kind='kde', xlim=(0,600), style='r--')
Out[24]:
Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution).
We will explore kernel density estimates more in the next section.
In [25]:
titanic.boxplot(column='fare', by='pclass', grid=False)
Out[25]:
You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles.
One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series.
In [26]:
bp = titanic.boxplot(column='age', by='pclass', grid=False)
for i in [1,2,3]:
    y = titanic.age[titanic.pclass==i].dropna()
    # Add some random "jitter" to the x-axis
    x = np.random.normal(i, 0.04, size=len(y))
    plt.plot(x, y, 'r.', alpha=0.2)
When data are dense, a couple of tricks used above help the visualization: random "jitter" along the x-axis keeps points with identical values from overprinting one another, and transparency (alpha) makes regions of high density appear darker.
A related but inferior cousin of the box plot is the so-called dynamite plot, which is just a bar chart with half of an error bar.
In [27]:
titanic.groupby('pclass')['fare'].mean().plot(kind='bar', yerr=titanic.groupby('pclass')['fare'].std())
Out[27]:
Why is this plot a poor choice? It reduces each group to a mean and a standard deviation, concealing the distribution of the underlying data, and the error bar displays only half of the uncertainty interval. A boxplot is always a better choice than a dynamite plot. To see why, consider the two fake samples below: one with 15 observations and one with only 2. The dynamite plot renders them as equally trustworthy-looking bars:
In [28]:
data1 = [150, 155, 175, 200, 245, 255, 395, 300, 305, 320, 375, 400, 420, 430, 440]
data2 = [225, 380]
fake_data = pd.DataFrame([data1, data2]).transpose()
p = fake_data.mean().plot(kind='bar', yerr=fake_data.std(), grid=False)
In [29]:
fake_data = pd.DataFrame([data1, data2]).transpose()
p = fake_data.mean().plot(kind='bar', yerr=fake_data.std(), grid=False)
x1, x2 = p.xaxis.get_majorticklocs()
plt.plot(np.random.normal(x1, 0.01, size=len(data1)), data1, 'ro')
plt.plot([x2]*len(data2), data2, 'ro')
Out[29]:
In [30]:
baseball = pd.read_csv("data/baseball.csv")
baseball.head()
Out[30]:
Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. There are no scatterplot methods for Series or DataFrame objects; we must instead use the matplotlib function scatter.
In [31]:
plt.scatter(baseball.ab, baseball.h)
plt.xlim(0, 700); plt.ylim(0, 200)
Out[31]:
We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors.
In [32]:
plt.scatter(baseball.ab, baseball.h, s=baseball.hr*10, alpha=0.5)
plt.xlim(0, 700); plt.ylim(0, 200)
Out[32]:
In [33]:
plt.scatter(baseball.ab, baseball.h, c=baseball.hr, s=40, cmap='hot')
plt.xlim(0, 700); plt.ylim(0, 200);
To view scatterplots of a large number of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optionally with histograms or kernel density estimates on the diagonal.
In [35]:
_ = pd.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(12,8), diagonal='kde')
One of the enduring strengths of carrying out statistical analyses in the R language is the quality of its graphics. In particular, the addition of Hadley Wickham's ggplot2 package allows for flexible yet user-friendly generation of publication-quality plots. Its strength is based on its implementation of a powerful model of graphics, called the Grammar of Graphics (GofG). The GofG is essentially a theory of scientific graphics that allows the components of a graphic to be completely described. ggplot2 uses this description to build the graphic component-wise, by adding various layers.
Pandas recently added functions for generating graphics using a GofG approach. Chiefly, this allows for the easy creation of trellis plots, which are a faceted graphic that shows relationships between two variables, conditioned on particular values of other variables. This allows for the representation of more than two dimensions of information without having to resort to 3-D graphics, etc.
Let's use the titanic dataset to create a trellis plot that represents 4 variables at a time. This consists of 4 steps:

1. Create an RPlot object that merely relates two variables in the dataset
2. Add a grid of panels (a TrellisGrid) that conditions the plot on two more variables
3. Add the type of plot (here, a density estimate) to be drawn in each panel
4. Render the plot on the current figure
In [36]:
from pandas.tools.rplot import *
titanic = titanic[titanic.age.notnull() & titanic.fare.notnull()]
tp = RPlot(titanic, x='age')
tp.add(TrellisGrid(['pclass', 'sex']))
tp.add(GeomDensity())
_ = tp.render(plt.gcf())
Using the cervical dystonia dataset, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a polynomial relationship between age and twstrs:
In [37]:
cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None)
cdystonia.head()
Out[37]:
In [38]:
plt.figure(figsize=(12,12))
bbp = RPlot(cdystonia, x='age', y='twstrs')
bbp.add(TrellisGrid(['week', 'treat']))
bbp.add(GeomScatter())
bbp.add(GeomPolyFit(degree=2))
_ = bbp.render(plt.gcf())
We can use the RPlot class to represent more than just trellis graphics. It is also useful for displaying multiple variables on the same panel, using combinations of color, size and shapes to do so.
In [39]:
cdystonia['site'] = cdystonia.site.astype(float)
In [40]:
plt.figure(figsize=(6,6))
cp = RPlot(cdystonia, x='age', y='twstrs')
cp.add(GeomPoint(colour=ScaleGradient('site', colour1=(1.0, 1.0, 0.5), colour2=(1.0, 0.0, 0.0)),
                 size=ScaleSize('week', min_size=10.0, max_size=200.0),
                 shape=ScaleShape('treat')))
_ = cp.render(plt.gcf())